Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data is often limited in scale and diversity when crowdsourced, and is computationally expensive to extend to new perturbation types when generated with supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. A task-specific teacher model then filters the generations to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples on three evaluation sets written by human workers or via human-AI collaboration.
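As a rough illustration of the generate-then-filter loop described above (a sketch, not the paper's implementation), the snippet below assumes hypothetical `prompt_llm` and `teacher_score` callables standing in for the prompted LLM and the task-specific teacher, and keeps only candidate perturbations the teacher scores above a confidence threshold:

```python
# Hypothetical sketch of a DISCO-style generate-then-filter distillation loop.
# `prompt_llm` and `teacher_score` are stand-ins, not components of the paper's released code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    premise: str
    hypothesis: str
    label: str

def distill_counterfactuals(
    seeds: List[Example],
    prompt_llm: Callable[[Example], List[Example]],   # proposes phrasal perturbations
    teacher_score: Callable[[Example], float],        # task-specific teacher's confidence
    threshold: float = 0.9,
) -> List[Example]:
    """Keep only LLM-generated perturbations that the teacher judges label-consistent."""
    kept: List[Example] = []
    for seed in seeds:
        for candidate in prompt_llm(seed):
            if teacher_score(candidate) >= threshold:
                kept.append(candidate)
    return kept
```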
In this paper, we present kogito, an open-source tool for generating commonsense inferences about situations described in text. kogito provides an intuitive and extensible interface for interacting with natural language generation models that can be used to hypothesize commonsense knowledge inferences from textual input. In particular, kogito offers several features for targeted, multi-granularity knowledge generation. These include a standardized API for training and evaluating knowledge models, and for generating and filtering inferences from them. We also include helper functions for converting natural language text into a format ingestible by knowledge models, covering intermediate pipeline stages such as knowledge head extraction from text, heuristic and model-based knowledge head-relation matching, and the ability to define and use custom knowledge relations. We make the code for kogito available at https://github.com/epfl-nlp/kogito along with thorough documentation at https://kogito.readthedocs.io.
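To make the pipeline stages concrete, here is a purely illustrative sketch of the flow the abstract describes (head extraction, head-relation matching, inference generation, filtering). The callables are hypothetical stand-ins, not kogito's actual API, for which the linked documentation is authoritative:

```python
# Illustrative pipeline sketch only; see https://kogito.readthedocs.io for the real API.
from typing import Callable, List, Tuple

def commonsense_inference_pipeline(
    text: str,
    extract_heads: Callable[[str], List[str]],       # e.g. phrase extraction from the input text
    match_relations: Callable[[str], List[str]],     # heuristic or model-based head-relation matching
    generate: Callable[[str, str], List[str]],       # knowledge model: (head, relation) -> tail inferences
    keep: Callable[[str, str, str], bool] = lambda h, r, t: True,   # optional filtering step
) -> List[Tuple[str, str, str]]:
    """Return filtered (head, relation, tail) commonsense inferences for `text`."""
    triples: List[Tuple[str, str, str]] = []
    for head in extract_heads(text):
        for relation in match_relations(head):
            for tail in generate(head, relation):
                if keep(head, relation, tail):
                    triples.append((head, relation, tail))
    return triples
```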
Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of a base (pre-trained) model to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but they also lack expressiveness: they struggle to accurately model an edit's intended scope (the examples affected by the edit), leading to inaccurate predictions for test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-parametric Editing with a Retrieval-Augmented Counterfactual model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact checking, and dialogue generation. We find that only SERAC achieves high performance on all three problems, consistently outperforming existing approaches to model editing by a significant margin. Code, data, and additional project information will be available at https://sites.google.com/view/serac-editing.
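A minimal sketch of the routing logic described above, assuming callable stand-ins for the base model, the scope classifier, and the counterfactual model (this is not the authors' implementation; see the project page for that):

```python
# Illustrative SERAC-style routing: explicit edit memory + scope classifier + counterfactual model.
from typing import Callable, List, Tuple

def serac_predict(
    x: str,
    edit_memory: List[Tuple[str, str]],              # stored (edit input, edit label) pairs
    base_model: Callable[[str], str],
    scope_classifier: Callable[[str, str], float],   # probability that x falls in an edit's scope
    counterfactual_model: Callable[[str, str, str], str],
    threshold: float = 0.5,
) -> str:
    if edit_memory:
        # Retrieve the stored edit most likely to cover x.
        scores = [scope_classifier(x, e_in) for e_in, _ in edit_memory]
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] >= threshold:
            e_in, e_out = edit_memory[best]
            # In scope: condition the counterfactual model on the retrieved edit.
            return counterfactual_model(x, e_in, e_out)
    # Out of scope of all stored edits: defer to the unmodified base model.
    return base_model(x)
```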
Although large pre-trained models have achieved impressive results on a variety of downstream tasks, even the largest existing models still make errors, and correct predictions can become outdated over time. Because it is impossible to detect all such failures at training time, it is desirable to enable developers and end users of such models to correct inaccurate outputs while leaving the model otherwise intact. However, the distributed, black-box nature of the representations learned by large neural networks makes producing such targeted edits difficult. Given only a single problematic input and a new desired output, fine-tuning approaches tend to overfit; other editing algorithms are either computationally infeasible or simply ineffective when applied to very large models. To enable easy post-hoc editing at scale, we propose Model Editor Networks with Gradient Decomposition (MEND), a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model's behavior. MEND learns to transform the gradient obtained by standard fine-tuning, using a low-rank decomposition of the gradient to make this transformation tractably parameterized. MEND can be trained on a single GPU in less than a day, even for models with more than 10 billion parameters; once trained, it can rapidly apply new edits to the pre-trained model. Our experiments with T5, GPT, BERT, and BART models show that MEND is the only approach to model editing that effectively edits the behavior of models with more than 10 billion parameters. Code and data are available at https://sites.google.com/view/mend-editing.
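The low-rank idea can be sketched as follows: for a linear layer, the fine-tuning gradient is the outer product of the layer's input and the gradient at its output, so an editor network only needs to transform the two vector factors rather than a full weight-sized matrix. The snippet below is an assumption-level illustration in PyTorch, not the released MEND code:

```python
# Illustrative rank-1 gradient editor: transform the two factors of grad_W = outer(delta, u).
import torch
import torch.nn as nn

class RankOneGradientEditor(nn.Module):
    def __init__(self, d_in: int, d_out: int, hidden: int = 128):
        super().__init__()
        # Small auxiliary networks acting on the gradient factors, not on the weights themselves.
        self.edit_u = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_in))
        self.edit_delta = nn.Sequential(nn.Linear(d_out, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

    def forward(self, u: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
        # u: 1-D layer input; delta: 1-D gradient of the loss w.r.t. the layer's output.
        u_t = self.edit_u(u)
        d_t = self.edit_delta(delta)
        # The outer product reconstructs a full (d_out x d_in) parameter update
        # from the two transformed low-rank factors.
        return torch.outer(d_t, u_t)
```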
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Although foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, since the defects of a foundation model are inherited by all the adapted models downstream. Despite the imminent widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what the consequences of their emergent properties are. To address these questions, we believe that much of the critical research on foundation models will need to be commensurate with their fundamentally sociotechnical nature.
Answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (the question and answer choices), a method needs to (i) identify relevant knowledge from large KGs and (ii) perform joint reasoning over the QA context and the KG. In this work, we propose a new model, QA-GNN, which addresses these challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and the KG into a joint graph and mutually update their representations through graph neural networks. We evaluate our model on QA benchmarks in the commonsense (CommonsenseQA, OpenBookQA) and biomedical (MedQA-USMLE) domains. QA-GNN outperforms existing LM and LM+KG models and exhibits capabilities for interpretable and structured reasoning, e.g., correctly handling negation in questions.
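A hedged sketch of the relevance-scoring step, with `lm_score` as a generic stand-in for the LM-based scorer used to rank candidate KG nodes against the QA context before the joint graph is built:

```python
# Illustrative KG-node pruning by LM relevance score (not the authors' code).
from typing import Callable, List

def prune_kg_nodes(
    qa_context: str,
    candidate_nodes: List[str],
    lm_score: Callable[[str], float],   # e.g. an LM-derived score for the concatenated text
    top_k: int = 200,
) -> List[str]:
    """Keep the top-k KG nodes judged most relevant to the QA context."""
    scored = [(lm_score(f"{qa_context} {node}"), node) for node in candidate_nodes]
    scored.sort(reverse=True)
    return [node for _, node in scored[:top_k]]
```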
Recent years have brought renewed interest in commonsense representation and reasoning within natural language understanding. Central to these advances is the development of new commonsense knowledge graphs (CSKGs), whose diverse facts can be used and referenced by machine learning models to tackle new and challenging tasks. At the same time, questions remain about the quality and coverage of these resources, given the massive scale required to comprehensively cover general commonsense knowledge. In this work, we posit that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents. We therefore propose a new evaluation framework for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them. With this new goal, we propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge that is not readily available in pre-trained language models. We evaluate its properties in comparison with other leading CSKGs, performing the first large-scale pairwise study of commonsense knowledge resources. We then show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events. Finally, through human evaluation, we show that the few-shot performance of GPT-3 (175B parameters), while impressive, remains lower than that of a BART-based knowledge model trained on ATOMIC 2020, despite GPT-3 using over 430x more parameters.
We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017). Contrary to many conventional KBs that store knowledge with canonical templates, commonsense KBs only store loosely structured open-text descriptions of knowledge. We posit that an important step toward automatic commonsense completion is the development of generative models of commonsense knowledge, and propose COMmonsEnse Transformers (COMET) that learn to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of commonsense modeling, our investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs. Empirical results demonstrate that COMET is able to generate novel knowledge that humans rate as high quality, with up to 77.5% (ATOMIC) and 91.7% (ConceptNet) precision at top 1, which approaches human performance for these resources. Our findings suggest that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
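As a minimal sketch, assuming a sequence-to-sequence knowledge model, training pairs can be formed by serializing each (head, relation, tail) triple into a source prompt and a free-text target. The example triple and the `[GEN]` marker below are illustrative, not prescribed by the paper:

```python
# Illustrative construction of COMET-style (source, target) training pairs.
from typing import List, Tuple

def build_comet_examples(triples: List[Tuple[str, str, str]]) -> List[Tuple[str, str]]:
    """Map (head, relation, tail) triples to text pairs for a generative knowledge model."""
    examples: List[Tuple[str, str]] = []
    for head, relation, tail in triples:
        source = f"{head} {relation} [GEN]"   # e.g. "X goes to the store xNeed [GEN]"
        examples.append((source, tail))       # target tail, e.g. "to have money"
    return examples
```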
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
The increasingly widespread adoption of large language models has highlighted the need for improving their explainability. We present context length probing, a novel explanation technique for causal language models, based on tracking the predictions of a model as a function of the length of available context, and allowing differential importance scores to be assigned to different contexts. The technique is model-agnostic and does not rely on access to model internals beyond computing token-level probabilities. We apply context length probing to large pre-trained language models and offer some initial analyses and insights, including the potential for studying long-range dependencies. The source code and a demo of the method are available.
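The core computation can be sketched as follows; `log_prob` is a hypothetical stand-in for whatever routine returns token-level log-probabilities from a causal LM, which is all the technique requires:

```python
# Illustrative context length probing: how does the target token's log-probability
# change as more left context becomes visible?
from typing import Callable, List

def context_length_scores(
    tokens: List[str],
    target_index: int,
    log_prob: Callable[[List[str], str], float],   # log P(target | context tokens)
) -> List[float]:
    target = tokens[target_index]
    # Log-probability of the target given only the last c context tokens, for c = 1..target_index.
    curve = [
        log_prob(tokens[target_index - c:target_index], target)
        for c in range(1, target_index + 1)
    ]
    # Differential importance of extending the context by one more token at a time.
    return [curve[c] - curve[c - 1] for c in range(1, len(curve))]
```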